Results 1 - 19 of 19
1.
Nat Hum Behav ; 2024 May 13.
Article in English | MEDLINE | ID: mdl-38740990

ABSTRACT

The spread of misinformation through media and social networks threatens many aspects of society, including public health and the state of democracies. One approach to mitigating the effect of misinformation focuses on individual-level interventions, equipping policymakers and the public with essential tools to curb the spread and influence of falsehoods. Here we introduce a toolbox of individual-level interventions for reducing harm from online misinformation. Comprising an up-to-date account of interventions featured in 81 scientific papers from across the globe, the toolbox provides both a conceptual overview of nine main types of interventions, including their target, scope and examples, and a summary of the empirical evidence supporting the interventions, including the methods and experimental paradigms used to test them. The nine types of interventions covered are accuracy prompts, debunking and rebuttals, friction, inoculation, lateral reading and verification strategies, media-literacy tips, social norms, source-credibility labels, and warning and fact-checking labels.

2.
Curr Opin Psychol ; 55: 101739, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38091666

ABSTRACT

Research on online misinformation has evolved rapidly, but organizing its results and identifying open research questions is difficult without a systematic approach. We present the Online Misinformation Engagement Framework, which classifies people's engagement with online misinformation into four stages: selecting information sources, choosing what information to consume or ignore, evaluating the accuracy of the information and/or the credibility of the source, and judging whether and how to react to the information (e.g., liking or sharing). We outline entry points for interventions at each stage and pinpoint the two early stages-source and information selection-as relatively neglected processes that should be addressed to further improve people's ability to contend with misinformation.


Subject(s)
Communication, Internet, Humans, Disinformation, Social Media
3.
Curr Opin Psychol ; 56: 101775, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38101247

ABSTRACT

Although cancer might seem like a niche subject, we argue that it is a model topic for misinformation researchers, and an ideal area of application given its importance for society. We first discuss the prevalence of cancer misinformation online and how it has the potential to cause harm. We next examine the financial incentives for those who profit from disinformation dissemination, how people with cancer are a uniquely vulnerable population, and why trust in science and medical professionals is particularly relevant to this topic. We finally discuss how belief in cancer misinformation has clear objective consequences and can be measured with treatment adherence and health outcomes such as mortality. In sum, cancer misinformation could assist the characterization of misinformation beliefs and be used to develop tools to combat misinformation in general.


Subject(s)
Neoplasms, Humans, Trust, Vulnerable Populations
4.
Perspect Psychol Sci ; : 17456916231186779, 2023 Nov 27.
Article in English | MEDLINE | ID: mdl-38010888

ABSTRACT

It is critical to understand how algorithms structure the information people see and how those algorithms support or undermine society's core values. We offer a normative framework for the assessment of the information curation algorithms that determine much of what people see on the internet. The framework presents two levels of assessment: one for individual-level effects and another for systemic effects. With regard to individual-level effects, we discuss whether (a) the information is aligned with the user's interests, (b) the information is accurate, and (c) the information is so appealing that it is difficult for a person's self-regulatory resources to ignore ("agency hacking"). At the systemic level, we discuss whether (a) there are adverse civic-level effects on a system-level variable, such as political polarization; (b) there are negative distributional or discriminatory effects; and (c) there are anticompetitive effects, with the information providing an advantage to the platform. The objective of this framework is both to inform the direction of future scholarship and to offer policymakers tools for intervention.

5.
J Appl Res Mem Cogn ; 12(3): 325-334, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37829768

ABSTRACT

Corrected misinformation can continue to influence inferential reasoning. It has been suggested that such continued influence is partially driven by misinformation familiarity, and that corrections should therefore avoid repeating misinformation to avoid inadvertent strengthening of misconceptions. However, evidence for such familiarity-backfire effects is scarce. We tested whether familiarity backfire may occur if corrections are processed under cognitive load. Although misinformation repetition may boost familiarity, load may impede integration of the correction, reducing its effectiveness and therefore allowing a backfire effect to emerge. Participants listened to corrections that repeated misinformation while in a driving simulator. Misinformation familiarity was manipulated through the number of corrections. Load was manipulated through a math task administered selectively during correction encoding. Multiple corrections were more effective than a single correction; cognitive load reduced correction effectiveness, with a single correction entirely ineffective under load. This provides further evidence against familiarity-backfire effects and has implications for real-world debunking.

6.
Cogn Res Princ Implic ; 8(1): 39, 2023 Jul 3.
Article in English | MEDLINE | ID: mdl-37395864

ABSTRACT

Corrections are a frequently used and effective tool for countering misinformation. However, concerns have been raised that corrections may introduce false claims to new audiences when the misinformation is novel. This is because boosting the familiarity of a claim can increase belief in that claim, and thus exposing new audiences to novel misinformation-even as part of a correction-may inadvertently increase misinformation belief. Such an outcome could be conceptualized as a familiarity backfire effect, whereby a familiarity boost increases false-claim endorsement above a control-condition or pre-correction baseline. Here, we examined whether standalone corrections-that is, corrections presented without initial misinformation exposure-can backfire and increase participants' reliance on the misinformation in their subsequent inferential reasoning, relative to a no-misinformation, no-correction control condition. Across three experiments (total N = 1156) we found that standalone corrections did not backfire immediately (Experiment 1) or after a one-week delay (Experiment 2). However, there was some mixed evidence suggesting corrections may backfire when there is skepticism regarding the correction (Experiment 3). Specifically, in Experiment 3, we found the standalone correction to backfire in open-ended responses, but only when there was skepticism towards the correction. However, this did not replicate with the rating scales measure. Future research should further examine whether skepticism towards the correction is the first replicable mechanism for backfire effects to occur.


Subject(s)
Communication, Recognition (Psychology), Humans, Problem Solving
7.
PLoS One ; 18(4): e0281140, 2023.
Article in English | MEDLINE | ID: mdl-37043493

ABSTRACT

Individuals often continue to rely on misinformation in their reasoning and decision making even after it has been corrected. This is known as the continued influence effect, and one of its presumed drivers is misinformation familiarity. As continued influence can promote misguided or unsafe behaviours, it is important to find ways to minimize the effect by designing more effective corrections. It has been argued that correction effectiveness is reduced if the correction repeats the to-be-debunked misinformation, thereby boosting its familiarity. Some have even suggested that this familiarity boost may cause a correction to inadvertently increase subsequent misinformation reliance; a phenomenon termed the familiarity backfire effect. A study by Pluviano et al. (2017) found evidence for this phenomenon using vaccine-related stimuli. The authors found that repeating vaccine "myths" and contrasting them with corresponding facts backfired relative to a control condition, ironically increasing false vaccine beliefs. The present study sought to replicate and extend this study. We included four conditions from the original Pluviano et al. study: a myths vs. facts contrast, a visual infographic, a fear appeal, and a control condition. The present study also added a "myths-only" condition, which simply repeated false claims and labelled them as false; theoretically, this condition should be most likely to produce familiarity backfire. Participants received vaccine-myth corrections and were tested immediately post-correction, and again after a seven-day delay. We found that the myths vs. facts condition reduced vaccine misconceptions. None of the conditions increased vaccine misconceptions relative to control at either timepoint, or relative to a pre-intervention baseline; thus, no backfire effects were observed. This failure to replicate adds to the mounting evidence against familiarity backfire effects and has implications for vaccination communications and the design of debunking interventions.


Subject(s)
Recognition (Psychology), Vaccines, Humans, Communication, Vaccination, Fear
8.
R Soc Open Sci ; 10(2): 220508, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36756068

ABSTRACT

In recent years, the UK has become divided along two key dimensions: party affiliation and Brexit position. We explored how division along these two dimensions interacts with the correction of political misinformation. Participants saw accurate and inaccurate statements (either balanced or mostly inaccurate) from two politicians from opposing parties but the same Brexit position (Experiment 1), or the same party but opposing Brexit positions (Experiment 2). Replicating previous work, fact-checking statements led participants to update their beliefs, increasing belief after fact affirmations and decreasing belief for corrected misinformation, even for politically aligned material. After receiving fact-checks, participants had reduced voting intentions and more negative feelings towards party-aligned politicians (likely due to low baseline support for opposing-party politicians). For Brexit alignment, the opposite was found: participants reduced their voting intentions and feelings for opposing (but not aligned) politicians following the fact-checks. These changes occurred regardless of the proportion of inaccurate statements, potentially indicating participants expect politicians to be accurate more than half the time. Finally, although we found division based on both party and Brexit alignment, effects were much stronger for party alignment, highlighting that even though new divisions have emerged in UK politics, the old divides remain dominant.

9.
Cancer Med ; 12(7): 8871-8879, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36659856

ABSTRACT

BACKGROUND: Previous research has found that individuals may travel outside their home countries in pursuit of alternative cancer therapies (ACT). The goal of this study was to compare individuals in the United States who proposed plans to travel abroad for ACT with individuals who sought ACT domestically. METHODS: Clinical and treatment data were extracted from campaign descriptions of 615 GoFundMe® campaigns fundraising for individuals in the United States seeking ACT between 2011 and 2019. We examined treatment modalities, treatment location, fundraising metrics, and online engagement within campaign profiles. Clinical and demographic differences between those who proposed international travel and those who sought ACT domestically were examined using two-sided Fisher's exact tests. Differences in financial and social engagement data were examined using two-sided Mann-Whitney tests. RESULTS: Of the total 615 campaigns, 237 (38.5%) mentioned plans to travel internationally for ACT, with the majority (81.9%) pursuing travel to Mexico. Campaigns that proposed international treatment requested more money ($35,000 vs. $22,650, p < 0.001), raised more money ($7833 vs. $5035, p < 0.001), had more donors (57 vs. 45, p = 0.02), and were shared more times (377 vs. 290.5, p = 0.008) compared to campaigns that did not. The median financial shortfall was greater for campaigns pursuing treatments internationally (-$22,640 vs. -$13,436, p < 0.003). CONCLUSIONS: Campaigns proposing international travel for ACT requested and received more money, were shared more online, and had more donors. However, there was significantly more unmet financial need among this group, highlighting potential financial toxicity on patients and families.
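The two statistical comparisons described in the methods above (Fisher's exact tests for categorical differences, Mann-Whitney tests for fundraising metrics) can be sketched in Python with SciPy. All numbers below are illustrative placeholders, not the study's dataset:

```python
import numpy as np
from scipy.stats import fisher_exact, mannwhitneyu

# Hypothetical 2x2 table for a clinical/demographic trait:
# rows = [trait present, trait absent], cols = [international, domestic]
table = np.array([[40, 50],
                  [197, 328]])
odds_ratio, p_fisher = fisher_exact(table, alternative="two-sided")

# Hypothetical amounts raised (USD) by campaigns proposing international
# vs. domestic ACT, compared with a two-sided Mann-Whitney U test
raised_intl = [7833, 12000, 5400, 20000, 9100]
raised_dom = [5035, 4200, 6100, 3900, 7000]
u_stat, p_mw = mannwhitneyu(raised_intl, raised_dom, alternative="two-sided")

print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_fisher:.3f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {p_mw:.3f}")
```

Fisher's exact test suits small categorical counts; the Mann-Whitney test compares distributions of skewed monetary amounts without assuming normality, which is why the abstract reports medians rather than means.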


Subject(s)
Crowdsourcing, Fund Raising, Medical Tourism, Neoplasms, Humans, United States, Neoplasms/epidemiology, Neoplasms/therapy, Demography
10.
Cognition ; 230: 105276, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36174261

ABSTRACT

After misinformation has been corrected, people initially update their belief extremely well. However, this change is rarely sustained over time, with belief returning towards pre-correction levels. This is called belief regression. The current study aimed to examine the association between memory for the correction and belief regression, and whether corrected misinformation suffers from belief regression more than affirmed facts. Participants from Prolific Academic (N = 612) rated the veracity of 16 misinformation and 16 factual items and were randomly assigned to a correction condition or test-retest control. Immediately after misinformation was corrected and facts affirmed, participants re-rated their belief and were asked whether they could remember the items' presented veracity. Participants repeated this post-test one month later. We found that belief and memory were highly associated, both immediately (ρ = 0.51), and after one month (ρ = 0.82), and that memory explained 66% of the variance in belief regression after correcting for measurement reliability. We found the rate of dissenting (accurately remembering that misinformation was presented as false but still believing it) remained stable between the immediate and delayed post-test, while the rate of forgetting quadrupled. After one month, 57% of participants who believed in the misinformation thought that the items were presented to them as true. Belief regression was more pronounced for misinformation than facts, but this was greatly attenuated once pre-test belief was equated. Together, these results clearly indicate that memory plays a fundamental role in belief regression, and that repeated corrections could be an effective method to counteract this phenomenon.
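The phrase "after correcting for measurement reliability" above refers to the classic Spearman disattenuation formula, which divides an observed correlation by the geometric mean of the two measures' reliabilities. A minimal sketch, with every number assumed for illustration rather than taken from the study:

```python
from math import sqrt

# Spearman's correction for attenuation: noisy measures understate the
# true correlation between the underlying constructs.
r_observed = 0.70   # observed memory-belief correlation (assumed)
rel_memory = 0.85   # test-retest reliability of the memory measure (assumed)
rel_belief = 0.87   # test-retest reliability of the belief measure (assumed)

# r_true = r_observed / sqrt(rel_x * rel_y)
r_true = r_observed / sqrt(rel_memory * rel_belief)
variance_explained = r_true ** 2  # squared correlation = variance explained

print(f"disattenuated r = {r_true:.2f}")
print(f"variance explained = {variance_explained:.0%}")  # ≈ 66%
```

The squared disattenuated correlation is what lets a study report "X% of variance explained" after stripping out measurement noise; with these assumed inputs, the result happens to land near the abstract's 66% figure.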


Subject(s)
Communication, Mental Recall, Humans, Reproducibility of Results
11.
J Exp Psychol Gen ; 151(7): 1655-1665, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35130012

ABSTRACT

The backfire effect occurs when a correction increases belief in the very misconception it is attempting to correct, and it is often used as a reason not to correct misinformation. The current study aimed to test whether correcting misinformation increases belief more than a no-correction control. Furthermore, we aimed to examine whether item-level differences in backfire rates were associated with test-retest reliability or theoretically meaningful factors. These factors included worldview-related attributes, including perceived importance and strength of precorrection belief, and familiarity-related attributes, including perceived novelty and the illusory truth effect. In 2 nearly identical experiments, we conducted a longitudinal pre/post design with N = 388 and 532 participants. Participants rated 21 misinformation items and were assigned to a correction condition or test-retest control. We found that no items backfired more in the correction condition compared to test-retest control or initial belief ratings. Item backfire rates were strongly negatively correlated with item reliability (ρ = -.61/-.73) and did not correlate with worldview-related attributes. Familiarity-related attributes were significantly correlated with backfire rate, though they did not consistently account for unique variance beyond reliability. While there have been previous papers highlighting the nonreplicable nature of backfire effects, the current findings provide a potential mechanism for this poor replicability. It is crucial for future research into backfire effects to use reliable measures, report the reliability of their measures, and take reliability into account in analyses. Furthermore, fact-checkers and communicators should not avoid giving corrective information due to backfire concerns.
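The item-level analysis above, correlating each item's backfire rate with its test-retest reliability, can be sketched with a Spearman rank correlation. The item values below are invented to show the mechanics, not the paper's data (here reliability and backfire rate are perfectly inversely ranked, so ρ comes out at exactly -1):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical item-level data: for each misinformation item, a test-retest
# reliability estimate and the fraction of participants whose belief
# *increased* after correction (the "backfire rate").
reliability = np.array([0.90, 0.80, 0.70, 0.60, 0.50, 0.40, 0.30])
backfire_rate = np.array([0.02, 0.04, 0.05, 0.08, 0.10, 0.12, 0.15])

rho, p = spearmanr(reliability, backfire_rate)
print(f"Spearman rho = {rho:.2f}")  # monotone decreasing -> -1.00
```

A strong negative correlation of this kind is what supports the paper's argument: apparent "backfire" at the item level may largely reflect measurement noise in unreliable items.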


Subject(s)
Communication, Recognition (Psychology), Humans, Reproducibility of Results
12.
Ann Am Acad Pol Soc Sci ; 700(1): 124-135, 2022 Mar.
Article in English | MEDLINE | ID: mdl-37936790

ABSTRACT

The public often turns to science for accurate health information, which, in an ideal world, would be error free. However, limitations of scientific institutions and scientific processes can sometimes amplify misinformation and disinformation. The current review examines four mechanisms through which this occurs: (1) predatory journals that accept publications for monetary gain but do not engage in rigorous peer review; (2) pseudoscientists who provide scientific-sounding information but whose advice is inaccurate, unfalsifiable, or inconsistent with the scientific method; (3) occasions when legitimate scientists spread misinformation or disinformation; and (4) miscommunication of science by the media and other communicators. We characterize this article as a "call to arms," given the urgent need for the scientific information ecosystem to improve. Improvements are necessary to maintain the public's trust in science, foster robust discourse, and encourage a well-educated citizenry.

13.
J Natl Cancer Inst ; 114(7): 1036-1039, 2022 Jul 11.
Article in English | MEDLINE | ID: mdl-34291289

ABSTRACT

There are few data on the quality of cancer treatment information available on social media. Here, we quantify the accuracy of cancer treatment information on social media and its potential for harm. Two cancer experts reviewed 50 of the most popular social media articles on each of the 4 most common cancers. The proportion of misinformation and potential for harm were reported for all 200 articles, and their association with the number of social media engagements was assessed using a 2-sample Wilcoxon rank-sum test. All statistical tests were 2-sided. Of 200 total articles, 32.5% (n = 65) contained misinformation and 30.5% (n = 61) contained harmful information. Among articles containing misinformation, 76.9% (50 of 65) contained harmful information. The median number of engagements for articles with misinformation was greater than factual articles (median [interquartile range] = 2300 [1200-4700] vs 1600 [819-4700], P = .05). The median number of engagements for articles with harmful information was statistically significantly greater than safe articles (median [interquartile range] = 2300 [1400-4700] vs 1500 [810-4700], P = .007).
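The analysis above, a two-sample Wilcoxon rank-sum test on engagement counts, can be sketched with SciPy. The engagement counts below are illustrative stand-ins, not the study's data:

```python
from scipy.stats import ranksums  # two-sample Wilcoxon rank-sum test

# Hypothetical engagement counts for articles rated as containing
# misinformation vs. factual articles (skewed, so medians are compared)
engagements_misinfo = [2300, 4700, 1200, 3500, 2800, 5100]
engagements_factual = [1600, 819, 4700, 900, 1400, 2100]

stat, p_value = ranksums(engagements_misinfo, engagements_factual)

# Simple proportions as reported in the abstract: 65 of 200 articles
prop_misinfo = 65 / 200  # 0.325, i.e. 32.5%

print(f"rank-sum statistic = {stat:.2f}, p = {p_value:.3f}")
print(f"proportion with misinformation = {prop_misinfo:.1%}")
```

The rank-sum test is the natural choice here because engagement counts are heavily right-skewed, which is also why the abstract reports medians with interquartile ranges rather than means.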


Subject(s)
Neoplasms, Social Media, Communication, Humans, Neoplasms/therapy
15.
Cogn Res Princ Implic ; 6(1): 83, 2021 Dec 29.
Article in English | MEDLINE | ID: mdl-34964924

ABSTRACT

Given that being misinformed can have negative ramifications, finding optimal corrective techniques has become a key focus of research. In recent years, several divergent correction formats have been proposed as superior based on distinct theoretical frameworks. However, these correction formats have not been compared in controlled settings, so the suggested superiority of each format remains speculative. Across four experiments, the current paper investigated how altering the format of corrections influences people's subsequent reliance on misinformation. We examined whether myth-first, fact-first, fact-only, or myth-only correction formats were most effective, using a range of different materials and participant pools. Experiments 1 and 2 focused on climate change misconceptions; participants were Qualtrics online panel members and students taking part in a massive open online course, respectively. Experiments 3 and 4 used misconceptions from a diverse set of topics, with Amazon Mechanical Turk crowdworkers and university student participants. We found that the impact of a correction on beliefs and inferential reasoning was largely independent of the specific format used. The clearest evidence for any potential relative superiority emerged in Experiment 4, which found that the myth-first format was more effective at myth correction than the fact-first format after a delayed retention interval. However, in general it appeared that as long as the key ingredients of a correction were presented, format did not make a considerable difference. This suggests that simply providing corrective information, regardless of format, is far more important than how the correction is presented.


Subject(s)
Communication, Problem Solving, Data Collection, Dioctyl Sulfosuccinic Acid, Humans, Pharmaceutical Vehicles
16.
J Appl Res Mem Cogn ; 9(3): 286-299, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32905023

ABSTRACT

One of the most concerning notions for science communicators, fact-checkers, and advocates of truth is the backfire effect: this occurs when a correction leads an individual to increase their belief in the very misconception the correction is aiming to rectify. There is currently a debate in the literature as to whether backfire effects exist at all, as recent studies have failed to find the phenomenon, even under theoretically favorable conditions. In this review, we summarize the current state of the worldview and familiarity backfire effect literatures. We subsequently examine barriers to measuring the backfire phenomenon, discuss approaches to improving measurement and design, and conclude with recommendations for fact-checkers. We suggest that backfire effects are not a robust empirical phenomenon, and more reliable measures, powerful designs, and stronger links between experimental design and theory could greatly help move the field ahead.

18.
Annu Rev Public Health ; 41: 433-451, 2020 Apr 2.
Article in English | MEDLINE | ID: mdl-31874069

ABSTRACT

The internet has become a popular resource to learn about health and to investigate one's own health condition. However, given the large amount of inaccurate information online, people can easily become misinformed. Individuals have always obtained information from outside the formal health care system, so how has the internet changed people's engagement with health information? This review explores how individuals interact with health misinformation online, whether it be through search, user-generated content, or mobile apps. We discuss whether personal access to information is helping or hindering health outcomes and how the perceived trustworthiness of the institutions communicating health has changed over time. To conclude, we propose several constructive strategies for improving the online information ecosystem. Misinformation concerning health has particularly severe consequences with regard to people's quality of life and even their risk of mortality; therefore, understanding it within today's modern context is an extremely important task.


Subject(s)
Communication, Data Accuracy, Information Dissemination, Information Literacy, Internet/statistics & numerical data, Public Health/statistics & numerical data, Adult, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged
19.
Science ; 363(6425): 374-378, 2019 Jan 25.
Article in English | MEDLINE | ID: mdl-30679368

ABSTRACT

The spread of fake news on social media became a public concern in the United States after the 2016 presidential election. We examined exposure to and sharing of fake news by registered voters on Twitter and found that engagement with fake news sources was extremely concentrated. Only 1% of individuals accounted for 80% of fake news source exposures, and 0.1% accounted for nearly 80% of fake news sources shared. Individuals most likely to engage with fake news sources were conservative leaning, older, and highly engaged with political news. A cluster of fake news sources shared overlapping audiences on the extreme right, but for people across the political spectrum, most political news exposure still came from mainstream media outlets.
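The concentration findings above (1% of individuals accounting for 80% of exposures) describe a heavy-tailed distribution of per-user engagement. A minimal sketch of how such a "top 1% share" is computed, using simulated log-normal exposure counts rather than the study's Twitter data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-user exposure counts with a heavy-tailed (log-normal)
# distribution; the sigma parameter is illustrative, not fit to the study.
exposures = rng.lognormal(mean=0.0, sigma=3.0, size=10_000)

# Sort users by exposure count, descending, and take the top 1%
sorted_desc = np.sort(exposures)[::-1]
top_1pct = sorted_desc[: len(sorted_desc) // 100]

# Share of all exposures attributable to the top 1% of users
share = top_1pct.sum() / sorted_desc.sum()
print(f"share of exposures from the top 1% of users: {share:.0%}")
```

With a sufficiently heavy tail, a tiny fraction of users dominates the total, which is the qualitative pattern the paper reports for fake news exposure and sharing.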
